local reference frame

Object-centric Task Representation and Transfer using Diffused Orientation Fields

Bilaloglu, Cem, Löw, Tobias, Calinon, Sylvain

arXiv.org Artificial Intelligence

Curved objects pose a fundamental challenge for skill transfer in robotics: unlike planar surfaces, they do not admit a global reference frame. As a result, task-relevant directions such as "toward" or "along" the surface vary with position and geometry, making object-centric tasks difficult to transfer across shapes. To address this, we introduce an approach using Diffused Orientation Fields (DOF), a smooth representation of local reference frames, for transfer learning of tasks across curved objects. By expressing manipulation tasks in these smoothly varying local frames, we reduce the problem of transferring tasks across curved objects to establishing sparse keypoint correspondences. DOF is computed online from raw point cloud data using diffusion processes governed by partial differential equations, conditioned on keypoints. We evaluate DOF under geometric, topological, and localization perturbations, and demonstrate successful transfer of tasks requiring continuous physical interaction, such as inspection, slicing, and peeling, across varied objects. We provide our open-source code at https://github.com/idiap/diffused_fields_robotics
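To make the core idea concrete, here is a toy sketch of diffusing an orientation field over a point cloud, conditioned on keypoints. This is not the authors' PDE solver: the function name, the brute-force k-NN graph, and the explicit-Euler relaxation are all illustrative choices; the paper's DOF method should be consulted for the actual formulation.

```python
import numpy as np

def diffuse_orientation_field(points, seed_idx, seed_dirs, k=8, steps=200, dt=0.5):
    """Diffuse unit orientation vectors over a k-NN graph of a point cloud.

    A discrete analogue of heat diffusion conditioned on keypoints:
    seeded points keep their orientation, all others relax toward the
    average of their neighbours, renormalised to stay unit-length.
    """
    n = len(points)
    # brute-force pairwise distances -> k nearest neighbours (excluding self)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(d2, axis=1)[:, 1:k + 1]

    # random initial field, then impose the keypoint conditions
    field = np.random.randn(n, 3)
    field /= np.linalg.norm(field, axis=1, keepdims=True)
    field[seed_idx] = seed_dirs

    for _ in range(steps):
        avg = field[nbrs].mean(axis=1)        # neighbour average (graph Laplacian step)
        field = field + dt * (avg - field)    # explicit Euler update
        field /= np.linalg.norm(field, axis=1, keepdims=True) + 1e-12
        field[seed_idx] = seed_dirs           # re-impose keypoint conditions
    return field
```

On a flat patch with consistent seeds this relaxes to a smooth, near-constant field; on curved geometry the same local-averaging principle yields frames that bend with the surface.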




The proposed algorithm is a unique combination of a GCN and a novel rotation-invariant local …

Neural Information Processing Systems

We appreciate the positive and constructive comments and address the main concerns raised by the reviewers below. Note that our training procedure takes the original 3D points as input and is consequently free from information loss. The manual feature extraction steps in RIConv and ClusterNet may incur such loss and lead to performance degradation; accordingly, the accuracy of A-CNN on z/SO(3) is as low as 35.8% in our experiment. G-CNNs [A4] are designed for meshes, and their target tasks differ from ours. In practice, computing PCAs at every level does not affect the overall accuracy at all.





Lorentz Local Canonicalization: How to Make Any Network Lorentz-Equivariant

Spinner, Jonas, Favaro, Luigi, Lippmann, Peter, Pitz, Sebastian, Gerhartz, Gerrit, Plehn, Tilman, Hamprecht, Fred A.

arXiv.org Machine Learning

Lorentz-equivariant neural networks are becoming the leading architectures for high-energy physics. Current implementations rely on specialized layers, limiting architectural choices. We introduce Lorentz Local Canonicalization (LLoCa), a general framework that renders any backbone network exactly Lorentz-equivariant. Using equivariantly predicted local reference frames, we construct LLoCa-transformers and graph networks. We adapt a recent approach to geometric message passing to the non-compact Lorentz group, allowing propagation of space-time tensorial features. Data augmentation emerges from LLoCa as a special choice of reference frame. Our models surpass state-of-the-art accuracy on relevant particle physics tasks, while being $4\times$ faster and using $5$-$100\times$ fewer FLOPs.
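The canonicalization trick in the abstract can be illustrated with a minimal sketch. For simplicity the example uses ordinary 3D rotations rather than the Lorentz group (LLoCa's setting is analogous but non-compact), and the frame is built by Gram–Schmidt from two input vectors; the function names and frame construction are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def local_frame(a, b):
    """Equivariant frame from two non-collinear vectors via Gram-Schmidt:
    if both inputs are rotated by Q, every axis of the frame rotates by Q too."""
    e1 = a / np.linalg.norm(a)
    b_perp = b - (b @ e1) * e1
    e2 = b_perp / np.linalg.norm(b_perp)
    e3 = np.cross(e1, e2)
    return np.stack([e1, e2, e3], axis=1)  # columns are the frame axes

def canonicalize(backbone, x):
    """Render an arbitrary backbone exactly rotation-equivariant (sketch of
    local canonicalization with SO(3) standing in for the Lorentz group):
    express the input in a data-dependent frame, apply the backbone, map back."""
    R = local_frame(x[0], x[1])   # frame predicted from the data itself
    y_local = backbone(x @ R)     # backbone sees a canonical, frame-independent view
    return y_local @ R.T          # rotate outputs back to the original frame
```

Because the frame co-rotates with the input, rotating `x` by `Q` leaves the canonical view `x @ R` unchanged, so any backbone, with no equivariant layers, becomes exactly equivariant.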


Review for NeurIPS paper: Rotation-Invariant Local-to-Global Representation Learning for 3D Point Cloud

Neural Information Processing Systems

Weaknesses: The idea of estimating and relying on a local reference frame to achieve rotation invariance has been explored before in a similar context, which may limit the novelty of this paper. For example, "A-CNN: Annularly Convolutional Neural Networks on Point Clouds, CVPR'19" uses the local point set to estimate a normal, as this paper does; the difference is that A-CNN uses this normal to project 3D points onto a 2D plane, but both share the basic idea of achieving local rotation invariance. "Relation-Shape Convolutional Neural Network for Point Cloud Analysis, CVPR'19" mentions in its rotation-invariance experiments that it constructs a local reference frame to obtain a rotation-invariant representation of the local point set, which is the same as in this paper. The randomization technique is also common in training deep networks for exploring a larger data or parameter space. The whole hierarchy is identical to PointNet.
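The PCA-based local-reference-frame construction the review refers to can be sketched as follows. This is a generic illustration, not the reviewed paper's code: the function name and the third-moment sign convention are assumptions, and the well-known eigenvector sign ambiguity is resolved only heuristically here.

```python
import numpy as np

def lrf_invariant_patch(neighbors):
    """Express a local neighbourhood in its own PCA frame.

    Because the frame is estimated from the points themselves, the
    re-expressed coordinates are invariant to global rotations of the
    input (up to the eigenvector sign ambiguity, fixed here by a
    third-moment convention).
    """
    centered = neighbors - neighbors.mean(axis=0)
    _, vecs = np.linalg.eigh(centered.T @ centered)  # ascending eigenvalues
    axes = vecs[:, ::-1]                             # principal axis first
    for i in range(3):
        # orient each axis by the sign of the third moment of the projections
        if ((centered @ axes[:, i]) ** 3).sum() < 0:
            axes[:, i] *= -1
    return centered @ axes  # coordinates in the local reference frame
```

The review's point is that once each patch is expressed this way (or reduced to a normal direction, as in A-CNN), the downstream network sees rotation-invariant inputs, which several prior works already exploit.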